
    Translating instructive texts

    Starting with Werlich (1982), many researchers within text linguistics and document design see instructive texts as a category distinct from persuasive texts. Others do not include either in their main typologies (e.g. Bonnet et al. 2001). This paper will claim, however, that instructions are a particular subtype of persuasive texts: instructing people is also persuading them to do something in a particular way, in a particular situation, or in a particular order. Consequently, all features characteristic of persuasion (e.g. Aristotle 4th c. BC, Bettinghaus 1968, Dacheux 1994, Whalen 1996) also appear in instructive texts. Drawing on a learner corpus of materials used in the Trans-Atlantic Tech Writing / Translation Project (Maylath et al. 2005, in press), in which Flemish students translate into Dutch instructive texts written in English by American students, the paper will discuss the problems involved in translating two relevant persuasive characteristics of instructive texts: expertise and positive audience orientation. For the former, attention will be paid to message form, structure and strategy, while the latter will lead to considerations of both individual differences in interpretation and cultural differences.

    Translation product quality: A conceptual analysis

    Against a background in which both the translation product and the translation process are briefly described as objects of quality assessment, this chapter presents an analysis of the concept of translation quality assessment, focusing on the translation product. The following features will be presented as parameters along which product quality assessment practices in institutions can be described: the purpose of the translation quality assessment; the criteria applied in the assessment, combined with their scaling and weighting; the translation quality levels aimed at; and the quality assessors involved. The characteristics will be illustrated by the translation quality assessment as applied in one Belgian institution. It is hoped that the analysis will lead to a fuller and deeper understanding of a translation's quality.
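
    The parameters enumerated above lend themselves to a simple structured representation. The sketch below is purely illustrative: all field names and example values are assumptions made for this summary, not the chapter's actual scheme.

        # Illustrative only: field names and example values are assumptions,
        # not the chapter's actual assessment scheme.
        from dataclasses import dataclass, field

        @dataclass
        class QualityCriterion:
            name: str       # e.g. "accuracy", "fluency"
            scale: range    # scoring scale applied to this criterion
            weight: float   # relative weight in the overall judgement

        @dataclass
        class AssessmentScheme:
            purpose: str                     # e.g. "gatekeeping", "diagnostic feedback"
            criteria: list[QualityCriterion] = field(default_factory=list)
            target_quality_level: str = ""   # the quality level aimed at
            assessors: list[str] = field(default_factory=list)

        scheme = AssessmentScheme(
            purpose="diagnostic feedback",
            criteria=[QualityCriterion("accuracy", range(0, 5), 0.6),
                      QualityCriterion("fluency", range(0, 5), 0.4)],
            target_quality_level="publishable",
            assessors=["revisor", "senior translator"],
        )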

    Collaboration in translation practices: a course design for peer feedback in translation training

    With collaboration on the agenda of many professional translation organizations in recent years, a fair number of translation trainers have argued for innovative approaches that would enhance trainees' collaboration skills (e.g. Gambier 2012; Huertas Barros 2011; Kenny 2008; Kiraly 2000/2014; O’Brien 2011). Against a background of increasing initiatives for collaboration in the Dutch-speaking translation industry, the design of a collaborative translation exercise at Ghent University will be described. This exercise involved students collaborating not only with each other in class, but also online at home, either with each other or with students from North Dakota State University in Fargo. Among other items, the article will cover learning outcomes, preparatory exercises, an introduction to peer feedback, and a description of class activities. Readers are invited to share their comments or accounts of their own experiences with the writer.

    Two sides of the same coin: assessing translation quality in two steps through adequacy and acceptability error analysis

    We propose facilitating the error annotation task of translation quality assessment by introducing an annotation process consisting of two separate steps, similar to the ones required by the European Standard for translation companies, EN 15038: an error analysis for errors relating to acceptability (where the target text is considered as a whole, as well as in context), and one for errors relating to adequacy (where source segments are compared to target segments). We present a fine-grained error taxonomy suitable for a diagnostic and comparative analysis of machine-translated texts, post-edited texts and human translations. Categories missing in existing metrics have been added, such as lexical issues, coherence issues, and text type-specific issues.
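
    As a minimal sketch of how such a two-step annotation might be recorded, consider the data structure below. The step labels follow the abstract; the category names, span encoding, and notes are hypothetical illustrations, not the taxonomy's actual labels.

        # A minimal sketch of the two-step annotation described above.
        # Category names and example annotations are hypothetical.
        from dataclasses import dataclass
        from enum import Enum

        class Step(Enum):
            ACCEPTABILITY = "acceptability"  # target text judged on its own and in context
            ADEQUACY = "adequacy"            # source segments compared to target segments

        @dataclass
        class ErrorAnnotation:
            step: Step
            category: str        # e.g. "lexical", "coherence", "text type-specific"
            span: tuple          # (start, end) character offsets in the target text
            note: str = ""

        # Pass 1: monolingual reading of the target text (acceptability).
        # Pass 2: bilingual comparison of source and target segments (adequacy).
        errors = [
            ErrorAnnotation(Step.ACCEPTABILITY, "coherence", (120, 162),
                            "connector contradicts the previous sentence"),
            ErrorAnnotation(Step.ADEQUACY, "meaning shift", (40, 57),
                            "negation in the source dropped in the target"),
        ]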

    Identifying the machine translation error types with the greatest impact on post-editing effort

    Translation Environment Tools make translators' work easier by providing them with term lists, translation memories and machine translation (MT) output. Ideally, such tools would automatically predict whether post-editing is more effortful than translating from scratch, and determine whether or not to provide translators with MT output. Current MT quality estimation systems rely heavily on automatic metrics, even though these do not accurately capture actual post-editing effort. In addition, such systems do not take translator experience into account, even though novices' translation processes differ from those of professional translators. In this paper, we report on the impact of MT errors on various types of post-editing effort indicators, for professional translators as well as student translators. We compare the impact of MT quality on a product effort indicator (HTER) with its impact on various process effort indicators. The translation and post-editing processes of student translators and professional translators were logged with a combination of keystroke logging and eye tracking, and the MT output was analyzed with a fine-grained translation quality assessment approach. We find that most post-editing effort indicators (product as well as process) are influenced by MT quality, but that different error types affect different post-editing effort indicators, confirming that a more fine-grained MT quality analysis is needed to correctly estimate actual post-editing effort. Coherence, meaning shifts, and structural issues are shown to be good indicators of post-editing effort. The additional impact of experience on these interactions between MT quality and post-editing effort is smaller than expected.
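
    For readers unfamiliar with the product effort indicator mentioned above: HTER is the number of edits needed to turn the MT output into its post-edited version, divided by the number of words in the post-edited version. The sketch below is a simplified illustration using plain word-level edit distance; real TER/HTER additionally counts block shifts as single edits.

        # Simplified HTER sketch: word-level Levenshtein distance between the
        # MT output and its post-edited version, normalized by the length of
        # the post-edited version. Real TER also handles block shifts.
        def hter(mt_output: str, post_edited: str) -> float:
            hyp, ref = mt_output.split(), post_edited.split()
            d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
            for i in range(len(hyp) + 1):
                d[i][0] = i
            for j in range(len(ref) + 1):
                d[0][j] = j
            for i in range(1, len(hyp) + 1):
                for j in range(1, len(ref) + 1):
                    cost = 0 if hyp[i - 1] == ref[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,         # deletion
                                  d[i][j - 1] + 1,         # insertion
                                  d[i - 1][j - 1] + cost)  # substitution
            return d[len(hyp)][len(ref)] / max(len(ref), 1)

        # 2 edits ("on", "the" inserted) over 6 post-edited words = 0.33
        print(hter("the cat sat mat", "the cat sat on the mat"))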

    Learning localization through Trans-Atlantic collaboration: bridging the gap between professions

    In light of what has taken place since their presentation at the IEEE International Professional Communication Conference in 2005, the authors describe additional requirements and merits of matching technical writing students in the US with translation students in Europe in a collaborative assignment. Where the original article dealt with how to set up and organize the collaboration, this tutorial delves into the pedagogical challenges and the process dynamics involved in such an exchange, including mediation, power, and teamwork issues.

    The impact of machine translation error types on post-editing effort indicators

    In this paper, we report on a post-editing study for general text types from English into Dutch, conducted with master's students of translation. We used a fine-grained machine translation (MT) quality assessment method with error weights that correspond to severity levels and are related to cognitive load. Linear mixed effects models are applied to analyze the impact of MT quality on potential post-editing effort indicators. The impact of MT quality is evaluated at three different levels, each with increasing granularity. We find that MT quality is a significant predictor of all types of post-editing effort indicators and that different types of MT errors predict different post-editing effort indicators.
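
    As an illustration of the kind of analysis described, the sketch below fits a linear mixed effects model with statsmodels, predicting a process effort indicator from a weighted MT error score, with a random intercept per participant. The column names and toy data are assumptions; the study's actual variables differ.

        # Hedged sketch of a linear mixed effects analysis; column names and
        # data are invented for illustration.
        import pandas as pd
        import statsmodels.formula.api as smf

        data = pd.DataFrame({
            "participant": ["p1"] * 3 + ["p2"] * 3 + ["p3"] * 3,
            "error_weight": [1, 3, 5, 2, 4, 6, 1, 4, 5],        # weighted MT error score
            "pause_duration": [0.9, 2.1, 3.8, 1.3, 2.7, 4.2,
                               0.8, 2.5, 3.6],                  # effort indicator (s)
        })

        # Random intercept per participant captures individual differences.
        model = smf.mixedlm("pause_duration ~ error_weight", data,
                            groups=data["participant"])
        print(model.fit().summary())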

    Translation methods and experience : a comparative analysis of human translation and post-editing with students and professional translators

    While the benefits of post-editing for technical texts are generally acknowledged, it remains unclear whether post-editing is a viable alternative to human translation for more general text types. In addition, we need a better understanding of both translation methods and of how they are performed by students as well as professionals, so that pitfalls can be identified and translator training adapted accordingly. In this article, we aim to gain a better understanding of the differences between human translation and post-editing for newspaper articles. Processes were registered by means of eye tracking and keystroke logging, which allows us to study translation speed, cognitive load, and the use of external resources. We also look at the final quality of the product, as well as translators' attitudes towards both methods of translation.
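
    To make the process measures concrete, the sketch below derives two of them from a keystroke log: translation speed in source words per minute, and total pause time, a common proxy for cognitive load. The one-second threshold and the field names are illustrative assumptions, not the study's operationalization.

        # Hypothetical process measures from a keystroke log; the one-second
        # pause threshold is an assumption for illustration.
        def process_measures(keystroke_times, source_word_count, pause_threshold=1.0):
            """keystroke_times: sorted timestamps (in seconds) of logged keystrokes."""
            total_time = keystroke_times[-1] - keystroke_times[0]
            gaps = [b - a for a, b in zip(keystroke_times, keystroke_times[1:])]
            pause_time = sum(g for g in gaps if g >= pause_threshold)
            return {"speed_wpm": source_word_count / (total_time / 60),
                    "pause_time_s": pause_time}

        log = [0.0, 0.4, 0.7, 2.9, 3.1, 3.2, 6.0, 6.3]  # toy timestamps
        print(process_measures(log, source_word_count=12))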